
    Understanding the Challenges of TV White Space Databases for Mobile Usage

    The transition to Digital Television (DTV) has freed up large spectrum bands, known as the digital dividend. These frequencies are now available for opportunistic use and are referred to as Television White Space (TVWS). Usage of the TVWS is regulated by licensing: primary users, mostly TV broadcasters, have bought licenses to use certain channels, while secondary users can use channels that the primary users are not currently utilizing. Their coexistence can be facilitated either by spectrum sensing or by White Space Databases (WSDBs); in this thesis, we concentrate on the latter. Technically, a WSDB is a geolocation database that stores the location and other relevant transmitter characteristics of primary users, such as antenna height and transmission power. The WSDB calculates the safety zone of a primary user by applying a radio wave propagation model to the stored information. A secondary user sends a request containing its location to the WSDB and receives a list of available channels. The main problem we concentrate on is the specific challenges that mobile devices face in using WSDBs. Current regulations demand that a mobile device query the WSDB every time it moves 100 meters, which increases the device's energy consumption and the network load. Fast-moving devices face an even more severe problem: there is always some delay in communicating with the WSDB, and it is possible that while waiting for the response the device moves another 100 meters. In that case, instead of using the reply, the device has to query the WSDB again. For fast-moving devices (e.g., carried inside vehicles) this vicious loop can continue indefinitely, resulting in an inability to use TVWS at all. A. Majid has proposed a predictive optimization algorithm called Nuna to deal with this problem. Our approach is different: we investigate spatiotemporal variations of the spectrum and, based on more than six months of observations, we propose a spectrum caching technique. According to our data, there are minimal temporal variations in the TVWS spectrum, which makes caching very appealing. We also sketch the technical details of a possible spectrum caching solution.
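
    The thesis's caching design is only sketched in the abstract; below is a minimal illustration of the idea, with all details assumed rather than taken from the work: replies from the WSDB are cached per 100 m x 100 m grid cell, so a device re-entering a cell it has already visited (and whose entry has not expired) needs no new query, and the hypothetical wsdb_client interface stands in for the real database protocol.

```python
import time
from math import floor

GRID_METERS = 100      # illustrative: one cell per 100 m step of the re-query rule
CACHE_TTL = 24 * 3600  # illustrative expiry, motivated by the low temporal variation

class SpectrumCache:
    """Cache WSDB replies per location grid cell (hypothetical sketch)."""

    def __init__(self, wsdb_client):
        self.wsdb = wsdb_client   # assumed interface: query(x_m, y_m) -> list of channels
        self.cells = {}           # (cell_x, cell_y) -> (timestamp, channels)

    def _cell(self, x_m, y_m):
        # Map planar coordinates in meters to a grid cell index.
        return (floor(x_m / GRID_METERS), floor(y_m / GRID_METERS))

    def available_channels(self, x_m, y_m):
        key = self._cell(x_m, y_m)
        entry = self.cells.get(key)
        if entry is not None:
            timestamp, channels = entry
            if time.time() - timestamp < CACHE_TTL:
                return channels                    # cache hit: no WSDB round trip
        channels = self.wsdb.query(x_m, y_m)       # cache miss: contact the WSDB
        self.cells[key] = (time.time(), channels)
        return channels
```

    One could additionally prefetch the cells along a vehicle's expected route so that a delayed reply never triggers the re-query loop described above.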

    Open Infrastructure for Edge Computing

    Edge computing, which brings computation closer to end-users and data producers, has firmly gained the status of an enabling technology for new kinds of emerging applications, such as Virtual/Augmented Reality and IoT. The motivation behind this rapidly developing computing paradigm is mainly two-fold. On the one hand, the goal is to minimize the latency that end-users experience, not only improving the quality of service but also enabling new kinds of applications that would not be feasible with higher delays. On the other hand, edge computing aims to save core network bandwidth, which would otherwise be overwhelmed by myriads of IoT devices sending their data to the cloud: after IoT streams are analyzed and aggregated at edge servers, much less networking capacity is required to persist the remaining information in distant cloud datacenters. Despite its solid motivation and continuous interest from both academia and industry, edge computing is still in its nascency. To leave adolescence and take its place on a par with the cloud computing paradigm, finally forming a versatile edge-cloud environment, the newcomer needs to overcome a number of challenges. First of all, the computing infrastructure for deploying edge applications and services is very limited at the moment. There are initiatives supported by the telecommunication industry, such as Multi-access Edge Computing, and cloud providers plan to establish facilities near the edge of the network; however, we believe that even more effort will be required to make edge servers generally available. Second, to emerge and function efficiently, the edge computing ecosystem needs practices, standards, and governance mechanisms of its own. This specificity originates from the highly dispersed nature of the edge, which implies high heterogeneity of resources and diverse administrative control over the computing facilities. Finally, the third challenge is the dynamicity of the edge computing environment due to, e.g., varying demand and migrating clients. In this thesis, we outline the underlying principles of what we call the Open Infrastructure for Edge (OpenIE), identify its key features, and provide solutions for them. Intended to tackle the challenges mentioned above, OpenIE defines a set of common practices and loosely coupled technologies that create a unified environment out of highly heterogeneous and administratively partitioned edge computing resources. In particular, we design a protocol capable of discovering edge providers on a global scale. Further, we propose a framework of Intelligent Containers (ICONs), capable of autonomous decision making and of forming a service overlay in a large-scale edge-cloud setting. As edge providers need to be economically incentivized, we devise a truthful double auction mechanism where edge providers can meet application owners or administrators who need to deploy an edge service. Owing to truthfulness, the best strategy for every participant in our auction is to bid their privately known valuation (or cost), making complex market behavior strategies obsolete. We analyze the potential of distributed ledgers to serve OpenIE's decentralized agreement and transaction handling, and we show how our auction can be implemented with the help of distributed ledgers. With the key building blocks of OpenIE mentioned above, we hope to make entry into service provisioning at the edge as easy as possible.
We hope that with the emergence of independent edge providers, edge computing will finally become pervasive.
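
    The dissertation's own auction design is not reproduced in the abstract; as a generic illustration of how truthfulness can be obtained in a double auction, the sketch below implements the classical McAfee trade-reduction mechanism for single-unit bids. The clearing prices are derived from bids that do not belong to the trading participants themselves, which is what removes the incentive to misreport; the function name and data layout are assumptions for this example, not the thesis's mechanism.

```python
def mcafee_double_auction(buyer_bids, seller_asks):
    """Classic McAfee trade-reduction double auction (illustrative, single-unit bids).
    Returns the matched (buyer_index, seller_index) pairs, the price buyers pay,
    and the price sellers receive."""
    buyers = sorted(enumerate(buyer_bids), key=lambda p: -p[1])   # highest bid first
    sellers = sorted(enumerate(seller_asks), key=lambda p: p[1])  # lowest ask first

    # k = largest number of pairs in which the buyer's bid covers the seller's ask.
    k = 0
    while k < min(len(buyers), len(sellers)) and buyers[k][1] >= sellers[k][1]:
        k += 1
    if k == 0:
        return [], None, None                      # no mutually beneficial trade

    # Candidate single clearing price taken from the first excluded bid/ask pair.
    if k < len(buyers) and k < len(sellers):
        price = (buyers[k][1] + sellers[k][1]) / 2
        if sellers[k - 1][1] <= price <= buyers[k - 1][1]:
            trades = [(buyers[i][0], sellers[i][0]) for i in range(k)]
            return trades, price, price            # all k pairs trade at one price

    # Trade reduction: drop the least profitable pair; its bid and ask set the
    # prices, so no trading participant can influence the price it faces.
    trades = [(buyers[i][0], sellers[i][0]) for i in range(k - 1)]
    return trades, buyers[k - 1][1], sellers[k - 1][1]
```

    For example, with buyer bids [10, 8, 5] and seller asks [3, 6, 9], two pairs trade and both sides clear at price 7.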

    Open Infrastructure for Edge: A Distributed Ledger Outlook

    High demand for low-latency services and local data processing has given rise to edge computing. As opposed to cloud computing, in this new paradigm computational facilities are located close to the end-users and data producers, on the edge of the network, hence the name. The critical issue for the proliferation of edge computing is the availability of local computational resources. Major cloud providers are already addressing the problem by establishing facilities in the proximity of end-users. However, there is an alternative trend, namely, developing an open infrastructure as a set of standards, technologies, and practices that enable any motivated party to offer its computational capacity for the needs of edge computing. An open infrastructure can give an additional boost to this new, promising paradigm and, moreover, help avoid problems for which cloud computing has long been criticized, such as vendor lock-in or privacy. In this paper, we discuss the challenges related to creating such an open infrastructure, in particular focusing on the applicability of distributed ledgers for contractual agreement and payment. Solving the challenge of contracting is central to realizing an open infrastructure for edge computing, and in this paper we highlight the potential and shortcomings of distributed ledger technologies in the context of our use case.
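
    The paper examines distributed ledgers at the level of challenges and trade-offs rather than a concrete design, so the sketch below is only a toy illustration of the underlying record-keeping idea: an append-only, hash-chained log in which agreement and payment entries become tamper-evident. A real distributed ledger would add replication and a consensus protocol across nodes; all field names here are hypothetical.

```python
import hashlib
import json
import time

class AgreementLedger:
    """Toy, single-node hash chain of contractual records (illustrative only)."""

    def __init__(self):
        self.blocks = []

    def append(self, record):
        # Link each new block to the hash of the previous one.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"record": record, "prev": prev_hash, "ts": time.time()}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append({**body, "hash": digest})
        return digest

    def verify(self):
        # Recompute every hash; any tampering breaks the chain.
        prev = "0" * 64
        for block in self.blocks:
            body = {k: block[k] for k in ("record", "prev", "ts")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if block["prev"] != prev or block["hash"] != digest:
                return False
            prev = block["hash"]
        return True

# Example: record an agreement and the matching payment entry (hypothetical fields).
ledger = AgreementLedger()
ledger.append({"type": "agreement", "provider": "edge-42", "client": "app-7", "price": 0.12})
ledger.append({"type": "payment", "provider": "edge-42", "client": "app-7", "amount": 0.12})
assert ledger.verify()
```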

    SPA: Harnessing Availability in the AWS Spot Market

    Amazon Web Services (AWS) offers transient virtual servers at a discounted price as a way to sell unused spare capacity in its data centers. Although transient servers are very appealing, as some instances are discounted by up to 90%, they come with no regular availability guarantees, since they are opportunistic resources sold on the spot market. In this paper, we present SPA, a framework that markedly increases spot instance reliability over time by exploiting insights gained from the analysis of historical data, such as cross-region price variability and the intervals between evictions. We implemented the SPA reliability strategy, evaluated it using over one year of historical pricing data from AWS, and found that, in the best scenario, we can increase the transient instance lifetime by adding a pricing overhead of 3.5% to the spot price.
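
    The SPA strategy itself is not detailed in the abstract; the sketch below only illustrates the kind of analysis it builds on, under the assumption of a uniformly sampled historical price series for one instance type in one region: it estimates how long an instance requested at a given price ceiling would have survived, and compares a few candidate overheads over the latest price (0.035 mirrors the 3.5% figure above; the other values are arbitrary).

```python
def expected_lifetime(prices, ceiling):
    """Average run length (in samples) until the historical spot price exceeds
    `ceiling`. `prices` is a chronological list sampled at a fixed interval."""
    runs, current = [], 0
    for price in prices:
        if price <= ceiling:
            current += 1            # instance would keep running in this interval
        else:
            runs.append(current)    # price spike: the instance would be evicted
            current = 0
    runs.append(current)
    return sum(runs) / len(runs)

def compare_overheads(prices, overheads=(0.0, 0.035, 0.10)):
    """Expected lifetime for price ceilings set as an overhead over the latest price."""
    latest = prices[-1]
    return {o: expected_lifetime(prices, latest * (1 + o)) for o in overheads}
```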

    Surrounded by the Clouds: A Comprehensive Cloud Reachability Study

    In the early days of cloud computing, datacenters were sparsely deployed at distant locations, far from end-users, resulting in high end-to-end communication latency. Today's cloud datacenters, however, have become more geographically spread, and network bandwidth keeps increasing, pushing end-user latency down. In this paper, we provide a comprehensive cloud reachability study: we perform extensive global client-to-cloud latency measurements towards 189 datacenters from all major cloud providers. We leverage the well-known measurement platform RIPE Atlas, involving up to 8500 probes deployed in heterogeneous environments, e.g., homes and offices. Our goal is to evaluate the suitability of modern cloud environments for various current and predicted applications. We achieve this by comparing our latency measurements against known human perception thresholds, which allows us to draw inferences on the suitability of current clouds for novel applications, such as augmented reality. Our results indicate that current cloud coverage can easily support several latency-critical applications, like cloud gaming, for the majority of the world's population.
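
    A minimal sketch of the kind of comparison the study performs, with illustrative latency budgets rather than the thresholds actually used in the paper: given measured round-trip times from probes to their nearest datacenter, it reports the fraction of probes whose latency fits within each application's budget.

```python
# Illustrative per-application latency budgets in milliseconds (assumed, not the
# paper's exact human perception thresholds).
THRESHOLDS_MS = {"cloud gaming": 100, "augmented reality": 20, "web browsing": 300}

def coverage(rtts_ms, thresholds=THRESHOLDS_MS):
    """Fraction of probe-to-datacenter RTT samples under each application's budget."""
    n = len(rtts_ms)
    return {app: sum(rtt <= limit for rtt in rtts_ms) / n
            for app, limit in thresholds.items()}

# Example: minimum RTT (ms) measured from each probe to its nearest datacenter.
print(coverage([38, 95, 12, 210, 64]))
```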

    Edge-Facilitated Augmented Vision in Vehicle-to-Everything Networks
